Metis: Understanding and Enhancing In-Network Regular Expressions

Neural Information Processing Systems

Regular expressions (REs) rely purely on expert knowledge and cannot utilize labeled data for better accuracy. Neural networks (NNs), by contrast, have shown superior accuracy and flexibility thanks to their ability to learn from rich labeled data. Nevertheless, NNs often perform poorly in cold-start scenarios and are too complex to deploy on network devices. In this paper, we propose Metis, a general framework that converts REs into models affordable for network devices, achieving superior accuracy and throughput by combining REs' expert knowledge with NNs' learning ability. In Metis, we convert REs to byte-level recurrent neural networks (BRNNs) without training.
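
To make the conversion concrete, here is a minimal sketch (in Python, not the authors' code) of the core idea: a DFA compiled from a regular expression can be executed as a byte-level recurrence over one-hot hidden states, so no training is required. The three-state automaton below, hand-written for the pattern "ab" as a substring, stands in for a real RE-to-DFA compiler.

```python
import numpy as np

# A minimal sketch, not Metis itself: a DFA embedded as a byte-level
# recurrence with one-hot hidden states.
# States: 0 = start, 1 = just saw 'a', 2 = accept (saw "ab"; absorbing).
NUM_STATES, NUM_BYTES = 3, 256

# Transition tensor: h_next = T[byte] @ h, where column s of T[byte]
# is one-hot at the DFA's next state for (state s, input byte).
T = np.zeros((NUM_BYTES, NUM_STATES, NUM_STATES))
for b in range(NUM_BYTES):
    for s in range(NUM_STATES):
        if s == 2:                       # accept state is absorbing
            nxt = 2
        elif b == ord('a'):              # any 'a' (re)starts a match
            nxt = 1
        elif s == 1 and b == ord('b'):   # "ab" completed
            nxt = 2
        else:
            nxt = 0
        T[b, nxt, s] = 1.0

def brnn_match(data: bytes) -> bool:
    """Run the DFA as an RNN-style recurrence: h_{t+1} = T[x_t] @ h_t."""
    h = np.zeros(NUM_STATES)
    h[0] = 1.0                           # one-hot start state
    for byte in data:
        h = T[byte] @ h
    return bool(h[2] > 0.5)              # mass on the accept state

assert brnn_match(b"xxabyy") and not brnn_match(b"ba")
```

Because each step is an RNN update with input-dependent weights, such a network reproduces the RE's decisions byte for byte out of the box, which is what makes subsequent learning from labeled data possible.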



INSIGHT: A Survey of In-Network Systems for Intelligent, High-Efficiency AI and Topology Optimization

Algazinov, Aleksandr, Chandra, Joydeep, Laing, Matt

arXiv.org Artificial Intelligence

In-network computation represents a transformative approach to addressing the escalating demands of Artificial Intelligence (AI) workloads on network infrastructure. By leveraging the processing capabilities of network devices such as switches, routers, and Network Interface Cards (NICs), this paradigm enables AI computations to be performed directly within the network fabric, significantly reducing latency, enhancing throughput, and optimizing resource utilization. This paper provides a comprehensive analysis of optimizing in-network computation for AI, exploring the evolution of programmable network architectures, such as Software-Defined Networking (SDN) and Programmable Data Planes (PDPs), and their convergence with AI. It examines methodologies for mapping AI models onto resource-constrained network devices, addressing challenges like limited memory and computational capabilities through efficient algorithm design and model compression techniques. The paper also highlights advancements in distributed learning, particularly in-network aggregation, and the potential of federated learning to enhance privacy and scalability. Frameworks like Planter and Quark are discussed for simplifying development, alongside key applications such as intelligent network monitoring, intrusion detection, traffic management, and Edge AI. Future research directions, including runtime programmability, standardized benchmarks, and new application paradigms, are proposed to advance this rapidly evolving field. This survey underscores the potential of in-network AI to create intelligent, efficient, and responsive networks capable of meeting the demands of next-generation AI applications.
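
The in-network aggregation idea mentioned above can be illustrated with a small sketch (hypothetical code, not the Planter or Quark APIs): workers quantize gradients to integers, the "switch" performs only per-slot integer addition, essentially the only arithmetic a programmable data plane offers, and every worker receives the aggregate in a single round trip instead of an all-to-all exchange.

```python
import numpy as np

# A toy model of in-network gradient aggregation; all names are
# illustrative assumptions, not a real data-plane API.
SCALE = 1 << 16                      # fixed-point scale; switches lack FPUs

def quantize(grad: np.ndarray) -> np.ndarray:
    return np.round(grad * SCALE).astype(np.int64)

def dequantize(agg: np.ndarray, num_workers: int) -> np.ndarray:
    return agg.astype(np.float64) / (SCALE * num_workers)  # mean gradient

def switch_aggregate(chunks: list[np.ndarray]) -> np.ndarray:
    """Per-slot integer addition, as a switch pipeline would perform it."""
    agg = np.zeros_like(chunks[0])
    for c in chunks:
        agg += c
    return agg

workers = [np.random.randn(8) for _ in range(4)]
agg = switch_aggregate([quantize(g) for g in workers])
mean = dequantize(agg, len(workers))
assert np.allclose(mean, np.mean(workers, axis=0), atol=1e-4)
```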


Towards a Dynamic Future with Adaptable Computing and Network Convergence (ACNC)

Shokrnezhad, Masoud, Yu, Hao, Taleb, Tarik, Li, Richard, Lee, Kyunghan, Song, Jaeseung, Westphal, Cedric

arXiv.org Artificial Intelligence

In the context of advancing 6G, a substantial paradigm shift is anticipated, highlighting comprehensive everything-to-everything interactions characterized by numerous connections and stringent adherence to Quality of Service/Experience (QoS/E) prerequisites. The imminent challenge stems from resource scarcity, prompting a deliberate transition to Computing-Network Convergence (CNC) as an auspicious approach for joint resource orchestration. While CNC-based mechanisms have garnered attention, their effectiveness in realizing future services, particularly in use cases like the Metaverse, may encounter limitations due to the continually changing nature of users, services, and resources. Hence, this paper presents the concept of Adaptable CNC (ACNC) as an autonomous Machine Learning (ML)-aided mechanism crafted for the joint orchestration of computing and network resources, catering to dynamic and voluminous user requests with stringent requirements. ACNC encompasses two primary functionalities: state recognition and context detection. Given the intricate nature of the user-service-computing-network space, the paper employs dimension reduction to generate live, holistic, abstract system states in a hierarchical structure. To address the challenges posed by dynamic changes, Continual Learning (CL) is employed, classifying the system state into contexts controlled by dedicated ML agents, enabling them to operate efficiently. These two functionalities are intricately linked within a closed loop overseen by the End-to-End (E2E) orchestrator to allocate resources. The paper introduces the components of ACNC, proposes a Metaverse scenario to exemplify ACNC's role in resource provisioning with Segment Routing v6 (SRv6), outlines ACNC's workflow, details a numerical analysis for efficiency assessment, and concludes with discussions on relevant challenges and potential avenues for future research.
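
As an illustration of the two functionalities, the following toy sketch (all names, dimensions, and thresholds are assumptions, not the paper's design) compresses a raw user-service-computing-network measurement vector into an abstract low-dimensional state and routes it to a per-context agent, spawning a new context when no existing one is close.

```python
import numpy as np

# Illustrative only: (1) state recognition via dimension reduction,
# (2) context detection routing states to dedicated per-context agents.
rng = np.random.default_rng(0)
history = rng.standard_normal((500, 64))      # raw telemetry snapshots

# (1) State recognition: project onto the top-k principal components.
mu = history.mean(axis=0)
_, _, Vt = np.linalg.svd(history - mu, full_matrices=False)
k = 4
def recognize(raw_state):
    return (raw_state - mu) @ Vt[:k].T        # abstract, low-dim state

# (2) Context detection: nearest centroid over known contexts; a new
# context (with a fresh agent) is spawned when nothing is close enough.
centroids, agents = [], []
def detect_context(z, radius=3.0):
    if centroids:
        d = [np.linalg.norm(z - c) for c in centroids]
        if min(d) < radius:
            return int(np.argmin(d))
    centroids.append(z.copy())
    agents.append(lambda state: "allocate-resources")  # placeholder policy
    return len(centroids) - 1

z = recognize(rng.standard_normal(64))
action = agents[detect_context(z)](z)
```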


ORIENT: A Priority-Aware Energy-Efficient Approach for Latency-Sensitive Applications in 6G

Shokrnezhad, Masoud, Taleb, Tarik

arXiv.org Artificial Intelligence

Anticipation for 6G's arrival comes with growing concerns about increased energy consumption in computing and networking. The expected surge in connected devices and resource-demanding applications presents unprecedented challenges for energy resources. While sustainable resource allocation strategies have been discussed in the past, these efforts have primarily focused on single-domain orchestration or ignored the unique requirements posed by 6G. To address this gap, we investigate the joint problem of service instance placement and assignment, path selection, and request prioritization, dubbed PIRA. The objective is to maximize the system's overall profit, as a function of the number of concurrently supported requests, while simultaneously minimizing energy consumption over an extended period of time. In addition, end-to-end latency requirements and resource capacity constraints are considered for computing and networking resources, where queuing theory is utilized to estimate the Age of Information (AoI) for requests. After formulating the problem as a non-linear program, we prove its NP-hardness and propose a method, denoted ORIENT. This method is based on the Double Dueling Deep Q-Learning (D3QL) mechanism and leverages Graph Neural Networks (GNNs) for state encoding. Extensive numerical simulations demonstrate that ORIENT yields near-optimal solutions for varying system sizes and request counts.
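
For readers unfamiliar with D3QL, the sketch below shows only the dueling Q-value head with placeholder random weights; the GNN state encoder, the double-estimator update, and the target network that ORIENT also uses are omitted, and all dimensions are assumptions.

```python
import numpy as np

# Dueling Q-head sketch: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
# Weights are random placeholders, not a trained model.
rng = np.random.default_rng(1)
STATE_DIM, HIDDEN, NUM_ACTIONS = 16, 32, 5    # assumed sizes

W1 = rng.standard_normal((STATE_DIM, HIDDEN)) * 0.1
Wv = rng.standard_normal((HIDDEN, 1)) * 0.1           # value stream V(s)
Wa = rng.standard_normal((HIDDEN, NUM_ACTIONS)) * 0.1  # advantage stream A(s,a)

def q_values(state: np.ndarray) -> np.ndarray:
    h = np.maximum(state @ W1, 0.0)           # shared ReLU feature layer
    v = h @ Wv                                # scalar state value
    a = h @ Wa                                # per-action advantages
    # Subtracting the mean advantage keeps V and A identifiable.
    return (v + a - a.mean(axis=-1, keepdims=True)).ravel()

q = q_values(rng.standard_normal(STATE_DIM))
greedy_action = int(np.argmax(q))             # e.g., a placement/path choice
```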


Generative Adversarial Learning for Intelligent Trust Management in 6G Wireless Networks

Yang, Liu, Li, Yun, Yang, Simon X., Lu, Yinzhi, Guo, Tan, Yu, Keping

arXiv.org Artificial Intelligence

The emerging sixth generation (6G) integrates heterogeneous wireless networks to seamlessly support anywhere, anytime networking. But 6G must offer high Quality-of-Trust to meet mobile users' expectations. Artificial intelligence (AI) is considered one of the most important components of 6G, and AI-based trust management is therefore a promising paradigm for providing trusted and reliable services. In this article, a generative adversarial learning-enabled trust management method is presented for 6G wireless networks. Some typical AI-based trust management schemes are first reviewed, and then a potential heterogeneous and intelligent 6G architecture is introduced. Next, the integration of AI and trust management is developed to optimize intelligence and security. Finally, the presented AI-based trust management method is applied to secure clustering to achieve reliable and real-time communications. Simulation results demonstrate its excellent performance in guaranteeing network security and service quality.
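
The adversarial idea can be shown with a deliberately tiny example (illustrative only, not the paper's model): a one-parameter generator imitates samples of "normal node behavior" while a logistic discriminator learns to tell them apart; after training, the discriminator's output can serve as a trust score for an observed behavior value.

```python
import numpy as np

# Toy adversarial trust learning on 1-D data; all distributions and
# hyperparameters are assumptions for illustration.
rng = np.random.default_rng(2)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, c = 0.1, 0.0          # discriminator D(x) = sigmoid(a*x + c)
w, b = 1.0, 0.0          # generator  G(z) = w*z + b
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0)          # "normal behavior" ~ N(4, 1)
    z = rng.standard_normal()
    fake = w * z + b
    dr, df = sigmoid(a * real + c), sigmoid(a * fake + c)
    # Discriminator ascent on log D(real) + log(1 - D(fake))
    a += lr * ((1 - dr) * real - df * fake)
    c += lr * ((1 - dr) - df)
    # Generator ascent on log D(G(z))
    df = sigmoid(a * (w * z + b) + c)
    w += lr * (1 - df) * a * z
    b += lr * (1 - df) * a

trust = lambda x: sigmoid(a * x + c)     # higher = more "normal-looking"
print(trust(4.0), trust(-5.0))           # in-distribution vs. anomalous node
```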


The impact of ML and AI in security testing - JAXenter

#artificialintelligence

Artificial Intelligence (AI) has come a long way from being just a dream to becoming an integral part of our lives. From self-driving cars to smart assistants such as Alexa, every industry vertical is leveraging the capabilities of AI. The software testing industry is also leveraging AI to enhance security testing while automating manual testing effort. AI- and ML-based security testing is helping test engineers save significant time while ensuring the delivery of robust security solutions for apps and enterprises. During security testing, it is essential to gather as much information as you can to increase the odds of success.


How AI, machine learning improve real-time communications traffic

#artificialintelligence

Modern networks are causing a seismic shift in how real-time communications traverse IP networks to take optimal paths. Previous-generation techniques to manage traffic required static "if X, then Y" scenarios to be preprogrammed into networks on a hop-by-hop basis using legacy quality of service. But thanks to advancements in machine learning and AI, networks can take advantage of end-to-end network visibility and dynamic rerouting of data flows to dramatically improve real-time communications traffic performance and reliability. Legacy networks rely on traditional quality of service (QoS) to help improve the reliability of real-time communication data flows, such as voice and video. QoS uses a three-step process of identification, marking and policy enforcement to give preferential treatment to critical flows, including real-time streaming applications.
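
The "marking" step is easy to demonstrate in code. The sketch below (Python, on platforms that expose IP_TOS; the destination address is a placeholder from the documentation range) tags a UDP socket's packets with DSCP EF (Expedited Forwarding, value 46), the conventional mark for voice, so downstream routers can identify and prioritize the flow during policy enforcement.

```python
import socket

# Marking step of the QoS three-step process: stamp outgoing voice
# packets with DSCP EF so the network can give them priority treatment.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2          # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Packets sent on this socket now carry the EF mark; enforcement (the
# third step) happens in the network, e.g. a priority queue for EF traffic.
# 192.0.2.10 is a documentation address; 160 bytes ~ 20 ms of G.711 audio.
sock.sendto(b"\x00" * 160, ("192.0.2.10", 4000))
```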